Granular configuration automation
Granular configuration automation (GCA) is a specialized area in the field of configuration management that focuses on visibility and control of an IT environment's configuration and bill of materials at the most granular level. The framework aims to improve the stability of IT environments by analyzing granular configuration information. It addresses the need to assess the threat level of environment risks and allows IT organizations to focus on the risks with the highest impact on performance.[1] Granular configuration automation combines two major trends in configuration management: the move to collect detailed and comprehensive environment information, and the growing use of automation tools.[2]
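As a rough illustration of the first trend, the sketch below collects a granular configuration "bill of materials" for a single host. It is a minimal, hypothetical example rather than any particular GCA product's data model; the file paths, the snapshot format, and the collect_snapshot function are all illustrative assumptions.

```python
# Illustrative sketch only: a minimal "granular" configuration snapshot of a host.
# The file paths and the snapshot layout are hypothetical examples, not the data
# model of any specific GCA tool.
import hashlib
import json
import os
import platform
from datetime import datetime, timezone

# Hypothetical set of configuration files to fingerprint at a granular level.
CONFIG_FILES = ["/etc/hosts", "/etc/resolv.conf", "/etc/ssh/sshd_config"]

def file_fingerprint(path):
    """Return a SHA-256 hash of a file's contents, or None if it is unreadable."""
    try:
        with open(path, "rb") as fh:
            return hashlib.sha256(fh.read()).hexdigest()
    except OSError:
        return None

def collect_snapshot():
    """Collect a granular configuration 'bill of materials' for this host."""
    return {
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "host": platform.node(),
        "os": platform.platform(),
        "python": platform.python_version(),
        "environment": dict(os.environ),  # environment variables
        "config_files": {p: file_fingerprint(p) for p in CONFIG_FILES},
    }

if __name__ == "__main__":
    print(json.dumps(collect_snapshot(), indent=2, default=str))
```

A snapshot such as this could be stored per environment and per release, so that later comparisons can be made at the level of individual files, packages, or settings rather than whole systems.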
Driving factors
For IT personnel, IT systems have grown in complexity,[3] supporting a wider and growing range of technologies and platforms. Application release schedules are accelerating, requiring greater attention to more information.[4] The average Global 2000 firm has more than a thousand applications that its IT organization deploys and supports.[5] New technology platforms such as cloud computing and virtualization offer benefits such as reduced server footprint and energy savings, but complicate configuration management through issues such as sprawl.[6] The need to ensure high availability and consistent delivery of business services has led many companies to develop automated configuration, change and release management processes.[7]
Downtime and system outages undermine the environments that IT professionals manage. Despite advances in infrastructure robustness, occasional hardware, software and database downtime still occurs. Dun & Bradstreet reports that 49% of Fortune 500 companies experience at least 1.6 hours of downtime per week, translating into more than 80 hours annually.[8] The growing cost of downtime has given IT organizations ample evidence of the need to improve processes. A conservative estimate from Gartner pegs the hourly cost of downtime for computer networks at $42,000, so a company suffering worse-than-average downtime of 175 hours a year could lose more than $7 million per year.[9]
The demands and complexity of incident investigation have put further strain on IT professionals, whose individual experience cannot keep pace with the scale of the environments their organizations run. An incident may be captured, monitored and reported using standardized forms, often through a help-desk or trouble-ticket software system and sometimes under a formal process methodology such as ITIL. But the core activity is still handled by a technical specialist "nosing around" the system, trying to "figure out" what is wrong based on previous experience and personal expertise.[10]
Potential applications
- Release validation – validating releases and mitigating the risk of production outages
- Incident prevention – identifying and alerting on undesired changes, thereby avoiding costly environment incidents (a minimal sketch appears after this list)
- Incident investigation – pinpointing the root cause of an incident and significantly cutting the time and effort spent on investigation
- Disaster recovery verification – accurately validating disaster recovery plans and eliminating surprises at the most vulnerable times
- Security – identifying deviations from security policy and best practices
- Compliance – discovering non-compliant situations and providing a detailed audit trail
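As a rough illustration of how release validation and incident prevention might build on granular data, the sketch below compares a baseline snapshot with the current environment and reports deviations. It is an assumption-laden example, not an actual GCA implementation: the diff_snapshots function, the snapshot file names, and the section keys are carried over from the hypothetical collector sketched earlier.

```python
# Illustrative sketch only: comparing a stored baseline snapshot with the current
# environment and reporting deviations. The snapshot structure follows the
# hypothetical collector above; file names and keys are assumptions.
import json

def diff_snapshots(baseline, current):
    """Return a list of granular differences between two configuration snapshots."""
    changes = []
    for section in ("environment", "config_files"):
        base, cur = baseline.get(section, {}), current.get(section, {})
        for key in sorted(set(base) | set(cur)):
            if base.get(key) != cur.get(key):
                changes.append({
                    "section": section,
                    "item": key,
                    "baseline": base.get(key),
                    "current": cur.get(key),
                })
    return changes

if __name__ == "__main__":
    # Hypothetical usage: snapshots captured before and after a release.
    with open("baseline.json") as fh:
        baseline = json.load(fh)
    with open("current.json") as fh:
        current = json.load(fh)

    for change in diff_snapshots(baseline, current):
        print(f"CHANGED {change['section']}/{change['item']}: "
              f"{change['baseline']!r} -> {change['current']!r}")
    # A non-empty deviation list could block a release or raise an incident alert.
```

In this framing, release validation checks that only intended items changed, while incident prevention alerts on deviations that appear outside an approved release window.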
See also
- Business continuity
- Change management
- Cloud computing
- Configuration management
- Information technology
- IT service continuity
- ITIL
- Release management
- Seven tiers of disaster recovery
- Virtualization
References
- ^ Risk Management Broken in Many Organizations, says Gartner, Government Technology.
- ^ Ken Jackson, The Dawning of the IT Automation Era, IT Business Edge.
- ^ Bob Violino, Reducing IT Complexity, Smart Enterprise.
- ^ Change, Configuration, and Release: What’s Really Driving Top Performance? Archived 2009-12-27 at the Wayback Machine, IT Process Institute.
- ^ Improving Application Quality by Controlling Application Infrastructure, Configuration Management Crossroads.
- ^ Cameron Sturdevant, How to Tame Virtualization Sprawl, eweek.
- ^ Challenges and Priorities for Fortune 1000 Companies.
- ^ How Much Does Downtime Really Cost?, Information Management.
- ^ How to quantify downtime, NetworkWorld.
- ^ Root Cause Analysis for IT Incidents Investigation, IT Toolbox.